17 research outputs found

    Adaptive Non-uniform Compressive Sampling for Time-varying Signals

    Full text link
    In this paper, adaptive non-uniform compressive sampling (ANCS) of time-varying signals, which are sparse in a proper basis, is introduced. ANCS employs the measurements of previous time steps to distribute the sensing energy among coefficients more intelligently. To this aim, a Bayesian inference method is proposed that does not require any prior knowledge of the importance levels of the coefficients or the sparsity of the signal. Our numerical simulations show that ANCS is able to achieve the desired non-uniform recovery of the signal. Moreover, if the signal is sparse in the canonical basis, ANCS can reduce the number of required measurements significantly. Comment: 6 pages, 8 figures, Conference on Information Sciences and Systems (CISS 2017), Baltimore, Maryland
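The core idea of the abstract above, using the previous recovery to focus sensing energy, can be illustrated with a minimal sketch. All names (`allocate_energy`, the uniform `floor`, the proportional rule) are illustrative assumptions, not the paper's Bayesian inference method:

```python
import numpy as np

n = 64                      # signal dimension
total_energy = float(n)     # sensing-energy budget per time step

def allocate_energy(prev_estimate, floor=0.1):
    """Distribute the sensing-energy budget in proportion to the
    estimated coefficient magnitudes from the previous time step,
    with a small uniform floor so no coefficient is ever ignored."""
    importance = np.abs(prev_estimate) + floor
    return total_energy * importance / importance.sum()

# First step: no prior estimate, so energy is spread uniformly.
x_prev = np.zeros(n)
w = allocate_energy(x_prev)

# Later steps: a sparse estimate concentrates energy on its support.
x_prev[[3, 17, 42]] = [5.0, -4.0, 3.0]
w = allocate_energy(x_prev)
```

The paper replaces this fixed proportional rule with Bayesian inference, which needs no prior knowledge of the coefficients' importance levels.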

    Robust and Scalable Data Representation and Analysis Leveraging Isometric Transformations and Sparsity

    Get PDF
    The main focus of this doctoral thesis is to study the problem of robust and scalable data representation and analysis. The success of any machine learning and signal processing framework relies on how the data is represented and analyzed. Thus, in this work, we focus on three closely related problems: (i) supervised representation learning, (ii) unsupervised representation learning, and (iii) fault-tolerant data analysis. For the first task, we put forward new theoretical results on why a certain family of neural networks can become extremely deep and how we can improve this scalability property in a mathematically sound manner. We further investigate how we can employ them to generate data representations that are robust to outliers and to retrieve representative subsets of huge datasets. For the second task, we discuss two different methods, namely compressive sensing (CS) and nonnegative matrix factorization (NMF). We show that we can employ prior knowledge, such as slow variation in time, to introduce an unsupervised learning component to the traditional CS framework and to learn better compressed representations. Furthermore, we show that prior knowledge and a sparsity constraint can be used in the context of NMF, not to find sparse hidden factors, but to enforce other structures, such as piecewise continuity. Finally, for the third task, we investigate how a data analysis framework can become robust to faulty data and faulty data processors. We employ Bayesian inference and propose a scheme that can solve the CS recovery problem in an asynchronous parallel manner. Furthermore, we show how sparsity can be used to make an optimization problem robust to faulty data measurements. The methods investigated in this work have applications in different practical problems such as resource allocation in wireless networks, source localization, image/video classification, and search engines. A detailed discussion of these practical applications is presented for each method.

    Missing Spectrum-Data Recovery in Cognitive Radio Networks Using Piecewise Constant Nonnegative Matrix Factorization

    Full text link
    In this paper, we propose a missing spectrum data recovery technique for cognitive radio (CR) networks using Nonnegative Matrix Factorization (NMF). It is shown that the spectrum measurements collected from secondary users (SUs) can be factorized as the product of a channel gain matrix and an activation matrix. Then, an NMF method with piecewise constant activation coefficients is introduced to analyze the measurements and estimate the missing spectrum data. The proposed optimization problem is solved by a Majorization-Minimization technique. The numerical simulations verify that the proposed technique is able to accurately estimate the missing spectrum data in the presence of noise and fading. Comment: 6 pages, 6 figures, Accepted for presentation at the MILCOM'15 Conference
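As a baseline for the factorization described above, the standard Lee-Seung multiplicative updates for NMF can be sketched as follows. This is only plain NMF under a Frobenius loss; the paper's piecewise-constant penalty on the activations and its Majorization-Minimization solver are omitted, and all variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, k, iters=200, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates (Frobenius loss).
    W plays the role of the channel-gain matrix and H the activation
    matrix; the paper's piecewise-constant regularizer is not included."""
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update gains
    return W, H

# Toy check: recover a low-rank nonnegative matrix.
V = rng.random((20, 3)) @ rng.random((3, 30))
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative updates keep both factors nonnegative by construction, which is why MM-style extensions (such as the piecewise-constant variant in the paper) build on this scheme.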

    Robust Target Localization Based on Squared Range Iterative Reweighted Least Squares

    Full text link
    In this paper, the problem of target localization in the presence of outlying sensors is tackled. This problem is important in practice because in many real-world applications the sensors might report irrelevant data unintentionally or maliciously. The problem is formulated by applying robust statistics techniques on squared range measurements, and two different approaches to solve the problem are proposed. The first approach is computationally efficient; however, only the objective convergence is guaranteed theoretically. On the other hand, the whole-sequence convergence of the second approach is established. To enjoy the benefits of both approaches, they are integrated to develop a hybrid algorithm that offers computational efficiency and theoretical guarantees. The algorithms are evaluated for different simulated and real-world scenarios. The numerical results show that the proposed methods meet the Cramér-Rao lower bound (CRLB) for a sufficiently large number of measurements. When the number of measurements is small, the proposed position estimator does not achieve the CRLB, though it still outperforms several existing localization methods. Comment: 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS): http://ieeexplore.ieee.org/document/8108770
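The squared-range IRLS idea above can be sketched in a few lines. Squaring the range equation ||x - s_i||^2 = r_i^2 and introducing the auxiliary unknown ||x||^2 yields a linear system, which is then solved with iteratively reweighted least squares so that outlying sensors are down-weighted. This is a hedged sketch with Huber-style weights, not the paper's exact algorithm or its convergence-guaranteed hybrid:

```python
import numpy as np

rng = np.random.default_rng(2)

def sr_irls(anchors, ranges, iters=20, delta=1.0):
    """Robust localization from squared ranges by IRLS (illustrative).
    Each row of the linear system is  [-2 s_i^T, 1] [x; ||x||^2]
    = r_i^2 - ||s_i||^2; Huber-style weights shrink the influence
    of sensors whose residuals are large (likely outliers)."""
    A = np.hstack([-2 * anchors, np.ones((len(anchors), 1))])
    b = ranges**2 - (anchors**2).sum(axis=1)
    w = np.ones(len(b))
    for _ in range(iters):
        y, *_ = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)
        r = np.abs(A @ y - b)
        w = np.where(r < delta, 1.0, delta / (r + 1e-12))
    return y[:-1]          # drop the auxiliary ||x||^2 coordinate

# Toy scenario: six anchors, one sensor reports a grossly wrong range.
x_true = np.array([2.0, 3.0])
anchors = rng.uniform(-5, 5, size=(6, 2))
ranges = np.linalg.norm(anchors - x_true, axis=1)
ranges[0] += 10.0          # outlying sensor
x_hat = sr_irls(anchors, ranges)
```

An ordinary least-squares fit would be pulled toward the corrupted measurement; the reweighting loop is what provides the robustness the abstract refers to.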

    Reliable Spectrum Hole Detection in Spectrum-Heterogeneous Mobile Cognitive Radio Networks via Sequential Bayesian Non-parametric Clustering

    No full text
    In this work, the problem of detecting radio spectrum opportunities in spectrum-heterogeneous cognitive radio networks is addressed. Spectrum opportunities are the frequency channels that are underutilized by the primary licensed users. Thus, by enabling the unlicensed users to detect and utilize them, we can improve the efficiency, reliability, and flexibility of radio spectrum usage. The main objective of this work is to discover the spectrum opportunities in the time, space, and frequency domains by proposing a low-cost and practical framework. Spectrum-heterogeneous networks are networks in which different sensors experience different spectrum opportunities. Thus, the sensing data from the sensors cannot simply be combined to reach consensus and detect the spectrum opportunities. Moreover, unreliable data, caused by noise or malicious attacks, will deteriorate the performance of the decision-making process. The problem becomes even more challenging when the locations of the sensors are unknown. In this work, a probabilistic model is proposed to cluster the sensors based on their readings, without requiring any knowledge of the sensors' locations. The complexity of the model, which is the number of clusters, is automatically inferred from the sensing data. The processing node, also referred to as the base station or the fusion center, infers the probability distributions of cluster memberships, channel availabilities, and devices' reliability in an online manner. After receiving each chunk of sensing data, the probability distributions are updated without repeating the computations on previous sensing data. All the update rules are derived mathematically by employing Bayesian data analysis techniques and variational inference. Furthermore, the inferred probability distributions are employed to assign unique spectrum opportunities to each of the sensors. To avoid interference among the sensors, physically adjacent devices should not utilize the same channels. However, since the locations of the devices are not known, cluster membership information is used as a measure of adjacency. This is based on the assumption that the measurements of the devices are spatially correlated; thus, adjacent devices, which experience similar spectrum opportunities, belong to the same cluster. The problem is then mapped into an energy minimization problem and solved via graph cuts. The goal of the proposed graph-theory-based method is to assign each device an available channel while avoiding interference among neighboring devices. The numerical simulations illustrate the effectiveness of the proposed methods compared to the existing frameworks.
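The assignment step described above can be illustrated with a much simpler greedy stand-in: treat devices in the same cluster as adjacent (since their measurements are spatially correlated) and never give two of them the same channel. The paper instead solves an energy minimization via graph cuts; this sketch, with all names and inputs invented for illustration, only captures the interference constraint:

```python
from collections import defaultdict

def assign_channels(cluster_of, available_channels):
    """Greedy stand-in for the paper's graph-cut assignment: devices
    sharing a cluster are treated as neighbors, so no channel is
    reused within a cluster. Inputs and names are illustrative."""
    used = defaultdict(set)        # cluster -> channels already taken
    assignment = {}
    for device, cluster in cluster_of.items():
        for ch in available_channels[device]:
            if ch not in used[cluster]:    # avoid intra-cluster reuse
                assignment[device] = ch
                used[cluster].add(ch)
                break
    return assignment

# Toy example: three devices, two inferred clusters,
# per-device lists of channels sensed as available.
cluster_of = {"d1": 0, "d2": 0, "d3": 1}
available = {"d1": [5, 7], "d2": [5, 9], "d3": [5]}
plan = assign_channels(cluster_of, available)
```

Here d1 and d2 share a cluster and therefore receive different channels, while d3, in another cluster, may reuse a channel; the graph-cut formulation in the paper optimizes such assignments globally rather than greedily.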

    Upper bounds for integrated information

    Full text link
    Integrated Information Theory (IIT) offers a theoretical framework to quantify the causal irreducibility of a system, of subsets of the units in a system, and of the causal relations among the subsets. Specifically, mechanism integrated information quantifies how much of the causal powers of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of the mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We study mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results showing that mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques to design systems that can maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit symmetries and constraints to reduce the computations significantly and to compare different connectivity profiles in terms of their maximal achievable integrated information.
